TRANSFER LEARNING ON MULTIFIDELITY DATA

Authors

Abstract

Neural networks (NNs) are often used as surrogates or emulators of partial differential equations (PDEs) that describe the dynamics of complex systems. A virtually negligible computational cost of such surrogates makes them an attractive tool for ensemble-based computation, which requires a large number of repeated PDE solutions. Since the latter are also needed to generate sufficient data for NN training, the usefulness of NN-based surrogates hinges on the balance between the training cost and the computational gain stemming from their deployment. We rely on multifidelity simulations to reduce the cost of data generation for subsequent training of a deep convolutional NN (CNN) using transfer learning. High- and low-fidelity images are generated by solving PDEs on fine and coarse meshes, respectively. We use theoretical results for the multilevel Monte Carlo method to guide our choice of the numbers of images of each kind. We demonstrate the performance of this strategy on the problem of estimation of the distribution of a quantity of interest, whose dynamics is governed by a system of nonlinear PDEs (parabolic PDEs of multiphase flow in heterogeneous porous media) with uncertain/random parameters. Our numerical experiments show that a mixture of a large number of low-fidelity data and a comparatively smaller number of high-fidelity data provides an optimal speed-up and prediction accuracy. The former is reported relative to both CNN training on high-fidelity images only and the solution of the PDEs. The latter is expressed in terms of the Wasserstein distance and the Kullback-Leibler divergence.
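As a concrete illustration of the two-stage workflow the abstract describes, the sketch below pretrains a convolutional surrogate on many cheap low-fidelity (coarse-mesh) solutions and then transfer-learns on a few expensive high-fidelity (fine-mesh) solutions by freezing the shared feature layers. This is a minimal sketch assuming PyTorch; the toy network, the 32x32 image size, and the sample counts n_lo and n_hi are illustrative placeholders (random tensors stand in for actual PDE solutions), not the paper's architecture or its MLMC-derived allocation.

import torch
import torch.nn as nn

class SurrogateCNN(nn.Module):
    """Toy convolutional surrogate mapping a parameter field to a QoI field."""
    def __init__(self):
        super().__init__()
        # Shared feature layers: pretrained on low-fidelity data, later frozen.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Output head: the part refined on high-fidelity data.
        self.head = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.head(self.features(x))

def train(model, x, y, epochs=50, lr=1e-3):
    # Optimize only the parameters that are still trainable.
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Stage 1: pretrain on many cheap low-fidelity (coarse-mesh) samples.
# The many-low/few-high split mimics an MLMC-style sample allocation.
n_lo, n_hi = 512, 32
x_lo, y_lo = torch.randn(n_lo, 1, 32, 32), torch.randn(n_lo, 1, 32, 32)
model = SurrogateCNN()
train(model, x_lo, y_lo)

# Stage 2: freeze the shared features and fine-tune the head on a small
# set of expensive high-fidelity (fine-mesh) samples.
for p in model.features.parameters():
    p.requires_grad = False
x_hi, y_hi = torch.randn(n_hi, 1, 32, 32), torch.randn(n_hi, 1, 32, 32)
train(model, x_hi, y_hi, lr=1e-4)

Freezing the early layers keeps the features learned from abundant coarse-mesh data, so the scarce fine-mesh samples only need to adjust the output head; this is one plausible reading of how the reported speed-up over high-fidelity-only training arises.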


Similar resources

Robust Textual Data Streams Mining Based on Continuous Transfer Learning

In a textual data stream environment, concept drift can occur at any time. Existing approaches that partition streams into chunks can run into problems if a chunk boundary does not coincide with the change point, which is impossible to predict. Since concept drift can occur at any point in the stream, it will certainly occur within chunks; this is called random concept drift. The paper proposes an ap...


Overcoming data scarcity with transfer learning

Despite increasing focus on data publication and discovery in materials science and related fields, the global view of materials data is highly sparse. This sparsity encourages training models on the union of multiple datasets, but simple unions can prove problematic as (ostensibly) equivalent properties may be measured or computed differently depending on the data source. These hidden contextu...


On Universal Transfer Learning

In transfer learning, the aim is to solve new learning tasks with fewer examples by using information gained from solving related tasks. Existing transfer learning methods have been used successfully in practice, and PAC analyses of these methods have been developed. But the key notion of relatedness between tasks has not yet been defined clearly, which makes it difficult to understand, let alon...


Transfer learning with one-class data

When training and testing data are drawn from different distributions, most statistical models need to be retrained on the newly collected data. Transfer learning is a family of algorithms that improves classifier learning in a target domain of interest by transferring knowledge from one or multiple source domains, where the data follow a different distribution. In this paper, we c...


Transfer Incremental Learning Using Data Augmentation

Due to catastrophic forgetting, deep learning remains poorly suited to incremental learning of new classes and examples over time. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA combines transfer learning from a pre-trained Deep Neural Network (DNN) as a feature extractor, a Nearest Class Mean (NCM) inspired classifier, and m... A minimal sketch of the NCM idea follows this entry.

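The TILDA snippet above names a concrete combination: a frozen pre-trained DNN used as a feature extractor, feeding a Nearest Class Mean classifier. Below is a minimal sketch of that NCM idea alone, assuming feature vectors have already been extracted; the 128-dimensional features and integer labels are placeholders, and TILDA's data-augmentation component and any details lost to the truncation are omitted.

import numpy as np

class NearestClassMean:
    """Incremental nearest-class-mean classifier over fixed feature vectors."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def partial_fit(self, feats, labels):
        # New classes and examples only update running means; no weights are
        # retrained, which is what sidesteps catastrophic forgetting.
        for f, y in zip(feats, labels):
            self.sums[y] = self.sums.get(y, 0.0) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats):
        classes = sorted(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        # Assign each feature vector to the class with the nearest mean.
        dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=-1)
        return [classes[i] for i in dists.argmin(axis=1)]

# Usage with random vectors standing in for pre-trained DNN features.
rng = np.random.default_rng(0)
ncm = NearestClassMean()
ncm.partial_fit(rng.normal(size=(10, 128)), [0] * 5 + [1] * 5)
print(ncm.predict(rng.normal(size=(3, 128))))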


Journal

Journal title: Journal of Machine Learning for Modeling and Computing

Year: 2022

ISSN: 2689-3967, 2689-3975

DOI: https://doi.org/10.1615/jmachlearnmodelcomput.2021038925